Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Kevin Paul Babu, Megha Raju, Neethu M, Prof. R. Rajaram
DOI Link: https://doi.org/10.22214/ijraset.2023.52720
Motorcycle accidents are increasing day by day. The helmet is the most important piece of safety equipment for a motorcyclist, yet in many accident cases the rider was not wearing one. The main purpose of wearing a helmet is to protect the rider's head when an accident occurs. The existing system lacks an efficient mechanism for detecting and monitoring violations of motorcycle rules such as riding triples, not wearing helmets, and over-speeding. There is a need for an automated system that can accurately identify and track motorcycles violating the rules, enabling law enforcement agencies to take appropriate action. The current process of manual identification and reporting of violations is time-consuming, prone to errors, and does not provide real-time insight into the frequency and distribution of violations across different areas. Law enforcement authorities require a comprehensive per-area analysis of violations to understand the problem areas and allocate resources effectively for enforcement and awareness campaigns.
I. INTRODUCTION
Motorcycle accidents have become a major public safety concern, with millions of people losing their lives or suffering serious injuries every year. In most cases, these accidents are caused by rule violations by motorcyclists, such as lane splitting, speeding, reckless driving, and riding without a helmet. Despite numerous efforts to improve motorcycle safety, the lack of a comprehensive system for detecting and analyzing these violations has hampered progress. In this paper, we propose the development of an integrated system that can detect and analyze different types of motorcycle rule violations. This system, which we refer to as the Motorcycle Rule Violation Detection System (MRVDS), employs advanced computer vision techniques, deep learning, and machine learning algorithms to identify and classify different types of violations. The system also includes a data analysis module that provides insights into the frequency, location, and severity of these violations[9]. There is currently no existing model for detecting and analyzing motorcycle rule violations; the proposed system aims to address this limitation.
This technology-driven system for detecting and analyzing motorcycle rule violations, built on the YOLOv4 framework, seeks to increase road safety by identifying and analyzing motorcycle riders who violate traffic laws. The system accurately identifies and categorizes many forms of motorcycle rule violations, such as helmetless riding, over-speeding, and triple riding, using deep learning techniques, particularly YOLOv4 (You Only Look Once version 4), a cutting-edge object detection framework. Because it was trained on a large dataset of motorcycle photos and videos, the model can identify many types of violations and distinguish them from normal riding behaviour. The YOLOv4 framework also ensures that the system can detect and track multiple violations simultaneously in real time. The data gathered by the system can be analyzed for insights into the trends and root causes of violations, enabling authorities to take the necessary preventive measures. For example, the system's ability to determine the times and places with the highest number of violations may lead to the installation of additional speed cameras or intensified enforcement in those areas[5]. Although the system relies on deep learning techniques, it is critical to ensure that it is deployed in a way that respects individuals' privacy and civil liberties. Overall, a YOLOv4-based motorcycle rule violation detection and analysis system is a useful tool for enhancing traffic safety, reducing motorcycle-related accidents, and enforcing the law.
The proposed model combines AI, machine learning, and deep learning concepts applied to traffic analysis. With over 1.25 million road fatalities each year, reducing accidents is a significant and urgent priority. To detect over-speeding, accidents, triple riding, and helmet use, a system has been developed that examines video from traffic surveillance cameras.
YOLOv4 is used in the proposed system as the model for detecting objects such as helmets and bikes[4]. The OpenCV library is integrated with YOLOv4 for computer vision and visual data processing.
II. METHODOLOGIES
The main goal is to create a web application where we can track vehicles and save information about vehicles that break traffic laws, including riders who do not wear helmets, ride triples, or exceed the speed limit. The system can also be deployed in public areas and colleges. If adopted in colleges, it will be able to identify the owner of a vehicle and record its entry time onto campus in order to issue a warning. The same idea can be used to detect helmets instantly by connecting to traffic cameras at signals.
Three modules make up the proposed system: a speed estimator, a helmet detector, and a triple riding detector. Initially, the images fall into two categories[5]. By combining two existing approaches (CNN and YOLO), each image is processed and compared, with the CNN used for feature extraction. The next step is to design a system that improves on the accuracy and precision of the existing system[7]. The system collects data from various sources, including CCTV cameras, sensors, and other devices installed on the road network. The collected data includes video footage, images, and other information relevant to motorcyclists' behaviour. The collected data is processed to remove any irrelevant or redundant information and to prepare it for further analysis[10]; this step may include image processing, data normalization, and data cleaning. The system then uses advanced algorithms, such as deep learning and computer vision, to detect and identify objects, specifically motorcycles and riders, in the collected data.
The system can then classify these objects as normal or as violations based on predefined criteria. The system analyzes the detected violations and generates reports and insights on the patterns and causes of the violations; this step can help authorities take appropriate measures to prevent future violations and improve road safety. Based on the system's reports, authorities can take appropriate measures to enforce traffic regulations, such as issuing fines or penalties, installing more cameras, or increasing enforcement efforts in high-violation areas.
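A minimal sketch of this end-to-end pipeline is given below. It assumes a single video source, a hypothetical `detect_violations` wrapper around the trained model, and a hypothetical `database.save` call; none of these names come from the original system.

```python
import cv2  # OpenCV is used to read frames from the surveillance feed


def process_feed(video_path, detector, database):
    """Illustrative loop: read frames, run detection, and log any violations."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of stream
        # `detector` is a hypothetical wrapper around the trained YOLOv4 model;
        # it is assumed to return (violation_type, bounding_box, plate_text) tuples.
        for violation_type, box, plate_text in detector.detect_violations(frame):
            database.save(violation_type, plate_text)  # hypothetical storage call
    cap.release()
```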
A. Speed Detection
DeepSORT is employed for vehicle tracking. Deep SORT is a tracking technique that builds on Simple Online and Realtime Tracking (SORT) and has produced outstanding results on the Multiple Object Tracking (MOT) problem. The real-world distance across a chosen region of interest is known in advance. When a vehicle enters this region, its 'inside flag' is set to 1, and for as long as the vehicle remains inside the region, its 'frame count' is incremented every frame.
When the vehicle leaves the region, the 'inside flag' is reset to 0. The speed can then be calculated using the formula
Speed = distance * fps / frame_count
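As a hedged illustration of this formula, the snippet below assumes the real-world length of the region of interest (in metres), the camera frame rate, and the frame count produced by the tracker are already available; the numbers in the example are arbitrary, not values from this work.

```python
def estimate_speed(distance_m, fps, frame_count):
    """Speed in m/s from the known region length and the frames spent inside it."""
    if frame_count == 0:
        return 0.0
    return distance_m * fps / frame_count


# Example: a 20 m region, a 30 fps camera, vehicle visible for 45 frames
speed_mps = estimate_speed(20, 30, 45)   # ~13.3 m/s
speed_kmph = speed_mps * 3.6             # ~48 km/h
```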
B. Helmet Detection
A custom dataset is prepared. The basic classes in the image dataset are Helmet, No-Helmet, and Motorbike. The dataset consisted of almost 4,000 images for training and 1,000 images for testing.
Modify the YOLOv4 configuration file for helmet detection: a custom configuration file is created based on the YOLOv4 architecture. The following parameters are modified: the number of classes is changed to 3 in the [yolo] layers; the filters in the [convolutional] layer preceding each [yolo] layer are updated to (classes + 5) * 3 (in this case, filters = 24); max_batches is set to 10,000; and steps is configured to 80% and 90% of max_batches (e.g., steps=8000,9000). The YOLOv4 model is then trained with the custom dataset using the custom configuration file, the training dataset, and pre-trained weights. The object detection model used was YOLOv4. The images were trained for 10,000 iterations on an RTX 3080 GPU. The training accuracy was 84% and the testing accuracy was 80%. The weights and configuration files are saved for deployment. For a given input image, the trained YOLOv4 model detects objects belonging to the three classes (bike, hel, and nohel). For each detected bike in the image, the centroid of its bounding box is calculated; likewise, for each detected nohel object, the centroid of its bounding box is calculated[3].
Check whether the centroid of the nohel bounding box lies inside the bounding box of the bike; if so, the rider without a helmet is considered to be on that bike. If a rider without a helmet is detected on a bike, the image is cropped to focus on that area[4]. The cropped bike image is passed to the license plate model, which detects and crops the number plate, and the cropped license plate is then passed to the OCR model, which extracts the plate number as text. The corresponding vehicle number and violation are stored in the database along with the time.
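A minimal sketch of this centroid-in-box association is shown below; the (x_min, y_min, x_max, y_max) box format and the helper names are assumptions for illustration, not the authors' exact code.

```python
def centroid(box):
    """Centre point of an (x_min, y_min, x_max, y_max) bounding box."""
    x_min, y_min, x_max, y_max = box
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0


def is_inside(point, box):
    """True if the point lies inside the bounding box."""
    x, y = point
    x_min, y_min, x_max, y_max = box
    return x_min <= x <= x_max and y_min <= y <= y_max


def riders_without_helmet(bike_boxes, nohel_boxes):
    """Pair every no-helmet detection with the bike whose box contains its centroid."""
    violations = []
    for bike in bike_boxes:
        for nohel in nohel_boxes:
            if is_inside(centroid(nohel), bike):
                violations.append((bike, nohel))
    return violations
```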
C. Triple Riding
A separate dataset is created for this module. Helmet, No-Helmet, and Motorbike are the fundamental classes in the image dataset. The dataset included 1,000 photos for testing and over 4,000 images for training. Modify the YOLOv4 configuration file for this module: a custom configuration file is created based on the YOLOv4 architecture[2].
The following parameters are modified: the number of classes is changed to 3 in the [yolo] layers; the filters in the [convolutional] layer preceding each [yolo] layer are updated to (classes + 5) * 3 (in this case, filters = 24); max_batches is set to 10,000; and steps is configured to 80% and 90% of max_batches (e.g., steps=8000,9000). The YOLOv4 model is then trained with the custom dataset using the custom configuration file, the training dataset, and pre-trained weights. The RTX 3080 GPU was used to train the images over 20,000 iterations. The training accuracy was 86% and the testing accuracy was 82%. The weights and configuration files are saved for deployment.
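Although the exact counting rule is not spelled out in the text, triple riding can be flagged by counting how many detected rider heads (helmet or no-helmet) fall on a single motorbike; the sketch below is a hedged, self-contained illustration of that idea, not the authors' implementation.

```python
def count_riders(bike_box, rider_boxes):
    """Count detected riders whose bounding-box centre falls inside the bike box."""
    bx1, by1, bx2, by2 = bike_box
    count = 0
    for x1, y1, x2, y2 in rider_boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if bx1 <= cx <= bx2 and by1 <= cy <= by2:
            count += 1
    return count


def triple_riding_violations(bike_boxes, rider_boxes, max_riders=2):
    """Bikes carrying more riders than the allowed limit (two by default)."""
    return [bike for bike in bike_boxes if count_riders(bike, rider_boxes) > max_riders]
```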
D. Number Plate Detection
The license plate detector is trained as a separate model, with the license plate as its only class. The dataset included almost 2,500 images for training and 800 images for testing.
The same object detection approach is also used for number plate detection and cropping[8]. Images of number plates under various lighting conditions are collected and annotated. The dataset is split into a training set (80-90%) and a validation set (10-20%), which helps evaluate the performance of the model during training. After training and validation the weights are saved for deployment. The trained model predicts bounding boxes around license plate regions in test images. Using the helmet detection model, non-helmet riders can be easily identified, and by cropping the image based on the detected class, the license plate of the rider becomes visible.
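The paper does not name the OCR engine that extracts the plate text; one hedged possibility is Tesseract through the pytesseract wrapper, applied to the cropped plate as sketched below.

```python
import cv2
import pytesseract  # assumes the Tesseract OCR engine is installed on the system


def read_plate(plate_crop):
    """Extract the plate number as text from a cropped licence-plate image."""
    gray = cv2.cvtColor(plate_crop, cv2.COLOR_BGR2GRAY)
    # Binarise so the characters stand out before OCR
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(thresh, config="--psm 7")  # single text line
    return "".join(ch for ch in text if ch.isalnum())
```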
III. MACHINE LEARNING TECHNIQUES
Machine learning has been an important area of focus for decades, but it has gained increased attention due to the current hype around artificial intelligence and its applications. Machine learning techniques can be categorized into supervised, unsupervised, and reinforcement learning, based on whether the data is labeled and whether external input is required.
Supervised learning involves learning with labeled data and can be further divided into regression and classification techniques. Regression aims to find the relationship between dependent and independent variables, while classification techniques classify inputs into different classes based on input features[6]. Training of data is necessary for supervised learning, and the input dataset is divided into training and testing datasets. Unsupervised learning involves learning without labeled data, and clustering is a technique that divides data into groups based on similarity. Association techniques find relationships in input data.
Reinforcement learning involves learning based on the rewards or penalties provided by the system for the output given by the model.
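As a small illustration of the supervised workflow described above (not part of the proposed system), a labelled dataset can be split into training and testing sets and a classifier trained with scikit-learn:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # any labelled dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier().fit(X_train, y_train)  # supervised classification
print("test accuracy:", clf.score(X_test, y_test))
```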
IV. DEEP LEARNING TECHNIQUES
A. You Only Look Once(YOLO)
YOLO (You Only Look Once) is a cutting-edge object detection algorithm that is commonly used in image and video processing applications. It is based on deep learning and is capable of detecting and classifying objects in real-time with a high degree of accuracy. Unlike traditional object detection algorithms, YOLO performs detection and classification in a single step, which makes it faster and more efficient. It uses a grid to divide the input image and predicts bounding boxes and class probabilities for each cell in the grid.
The predictions from multiple cells are then combined to generate the final output. YOLO has many versions, with YOLOv4 being the most recent one that sets new standards in object detection accuracy. YOLOv4 employs advanced techniques, such as a more robust backbone network, data augmentation, and improved loss functions, to enhance its performance and accuracy. YOLOv4 is widely utilized in various computer vision applications, including robotics, surveillance, and traffic monitoring.
B. YOLO V4
The YOLOv4 algorithm, which was released in April 2020, is an improved version of the YOLO family of object detection algorithms. It was developed by Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao, and is based on the CSPDarknet53 architecture.
YOLOv4 (You Only Look Once version 4) is a state-of-the-art object detection algorithm used in computer vision applications. It is a deep learning-based algorithm that can detect and classify objects in images or videos with high accuracy and speed.
YOLOv4 uses a single neural network to detect and classify objects in one step, making it faster and more efficient than traditional object detection algorithms. It divides the input image into a grid of cells and predicts bounding boxes and class probabilities for each cell. The predictions from different cells are then combined to produce the final output.
YOLOv4 incorporates several advanced techniques, including a more powerful backbone network, data augmentation, and improved loss functions, to enhance its accuracy and performance. The backbone network, called CSPDarknet53, is a more robust version of the Darknet architecture used in previous versions of YOLO. The data augmentation techniques used in YOLOv4, such as mosaic, CutMix, and MixUp augmentation, help improve the robustness of the model. The improved loss functions, such as the focal loss and the binary cross-entropy loss, help address the issue of class imbalance in object detection. YOLOv4 has achieved state-of-the-art performance on various object detection benchmarks, including the COCO dataset and the PASCAL VOC dataset, and it is widely used in computer vision applications such as surveillance, traffic monitoring, and robotics. YOLOv4 introduces several new concepts, including Weighted Residual Connections, Cross-Stage-Partial connections, cross mini-batch normalization, self-adversarial training, Mish activation, DropBlock, and CIoU loss. Its architecture comprises a CSPDarknet53 backbone, spatial pyramid pooling, a PANet path-aggregation neck, and a YOLOv3 head.
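For reference, the Mish activation mentioned above is the smooth, non-monotonic function Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x)), and DropBlock is a structured form of dropout that removes contiguous regions of a feature map rather than individual activations.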
YOLOv4 achieves higher Average Precision and Frames Per Second metrics compared to its predecessor, YOLOv3, with a 10% increase in Average Precision and 12% improvement in Frames Per Second. Within Deep Learning, a Convolutional Neural Network or CNN is a type of artificial neural network, which is widely used for image/object recognition and classification. Deep Learning thus recognizes objects in an image by using a CNN.
The convolutional layers are the key component of a CNN, where filters are applied to the input image to extract features such as edges, textures, and shapes. The output of the convolutional layers is then passed through pooling layers, which are used to down-sample the feature maps, reducing the spatial dimensions while retaining the most important information. The output of the pooling layers is then passed through one or more fully connected layers, which are used to make a prediction or classify the image.
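A minimal PyTorch sketch of this convolution, pooling, and fully connected pattern is shown below; the layer sizes are arbitrary and purely illustrative, not those of the proposed model.

```python
import torch.nn as nn

# Tiny illustrative CNN: convolutional feature extraction followed by classification
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutions extract edges/textures
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling down-samples the feature maps
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 3),                  # fully connected layer -> 3 classes, assuming 224x224 input
)
```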
The YOLOv4 algorithm can be used for motorcycle rules violation detection and analysis systems. The system would use a video feed from a camera or multiple cameras placed on the road, which would capture footage of motorcyclists on the road. The YOLOv4 algorithm would then be applied to the video feed to detect and classify different objects in the video, including motorcycles, cars, pedestrians, and traffic signs.
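As a hedged sketch of how a trained YOLOv4 model can be applied to such a feed, the following uses OpenCV's DNN module; the configuration, weight, and video file names are placeholders, not artifacts from this work.

```python
import cv2

# Load the trained Darknet model; file names are placeholders for the saved weights/config
net = cv2.dnn.readNetFromDarknet("yolov4-custom.cfg", "yolov4-custom.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture("traffic_feed.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each detection comes back as a class id, a confidence score, and an (x, y, w, h) box
    class_ids, confidences, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```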
To detect motorcycle rule violations, the system would first need to identify motorcycles in the video feed. YOLOv4 would be trained on a dataset of motorcycle images to recognize various types of motorcycles from different angles, with different backgrounds and lighting conditions. Once the algorithm is trained, it can accurately detect motorcycles in the video feed and classify them based on their make and model.
Next, the YOLOv4 algorithm would be trained on a dataset of traffic signs to recognize different types of traffic signs, including speed limit signs, stop signs, and no entry signs. The algorithm would detect the traffic signs in the video feed and classify them based on their type and meaning.
Once the system has identified the motorcycles and traffic signs, it can analyze the footage to detect potential rule violations, such as speeding, running red lights, or illegal turns. The system can also track the movements of the motorcycles and other vehicles to identify patterns of dangerous driving behavior, such as sudden lane changes or swerving.
Overall, the YOLOv4 algorithm can be a powerful tool for developing a motorcycle rules violation detection and analysis system, helping to improve road safety and prevent accidents.
V. RELATED WORKS
A. An IoT-Based Vehicle Accident Detection and Classification System Using Sensor Fusion
The proposed system is an innovative approach to vehicle classification and speed estimation, which has the potential to improve traffic management and road safety significantly. By utilizing a combination of PIR and ultrasonic sensors, the system can detect and track vehicles in real-time, providing accurate classification and speed estimates. This approach is advantageous over traditional systems that rely on only one type of sensor, as it improves the accuracy and reliability of the system.
The PIR sensors are used to detect the presence of a vehicle, while the ultrasonic sensors measure the distance between the vehicle and the sensor. By analyzing the data from both sensors, the system can determine the size and speed of the vehicle. The system also employs machine learning algorithms to classify the vehicles into different categories based on their size and shape. This classification is useful for traffic management purposes, as it can help optimize traffic flow and reduce congestion.
Furthermore, the system estimates the speed of the vehicles by analyzing the time it takes for them to pass between two sensors. This speed estimation is useful for monitoring traffic flow and enforcing speed limits. By providing real-time monitoring capabilities, the system can help authorities respond quickly to any incidents that may occur on the road.
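In that setup the calculation reduces to dividing the known spacing between the two sensors by the time gap between their detections; the sketch below is only an illustration, and the 1 m spacing is an assumed value rather than one taken from the cited paper.

```python
def sensor_speed(sensor_spacing_m, t_first, t_second):
    """Speed in m/s from the time a vehicle takes to travel between two sensors."""
    return sensor_spacing_m / (t_second - t_first)


speed_kmph = sensor_speed(1.0, 10.00, 10.12) * 3.6  # ~30 km/h for a 0.12 s gap
```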
The machine learning algorithms employed in the system can adapt to changes in traffic patterns and improve accuracy over time. This adaptability is crucial for ensuring that the system remains accurate and effective, even as traffic patterns change.
Overall, the proposed system is an innovative and effective approach to vehicle classification and speed estimation. Its combination of PIR and ultrasonic sensors, along with its real-time monitoring capabilities and machine learning algorithms, has the potential to make a significant impact on improving traffic management and road safety.
B. Vehicle Classification And Speed Estimation Using Combined Passive Infrared/ Ultrasonic Sensors
In the proposed system[2], a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. The wavelet transform based method slightly outperforms the constrained cross correlation method.
C. Lokesh Allamki, Manjunath Panchakshari, Ashish Sateesha, K S Pratheek, "Helmet Detection Using Machine Learning and Automatic Number Plate Recognition"
This project on automated helmet detection uses machine learning methods to categorize vehicles as two-wheelers or not and, if a vehicle is a two-wheeler, to recognize whether the rider is wearing a helmet. If the rider or the pillion is not wearing a helmet, an image of the person with the vehicle is captured. The number plate of the vehicle is then recognized as a string of characters and numbers, and the vehicle number together with the captured images is stored in a database as proof. Using this data, fines can be imposed on riders who repeatedly fail to wear a helmet.
Classification and descriptor-based methods are used to detect vehicles, identify persons on two-wheelers, and determine whether they are wearing a helmet.
Detection of the background: a reference image of the road is taken as the background so that vehicle motion can be detected with respect to a stable object (the road).
Segmentation of moving objects: using background subtraction, the moving objects (vehicles) are separated from the background, leaving only an image of the vehicles (see the sketch after these steps).
Vehicle classification: a feature vector is obtained for each segmented image and passed to a random forest classifier to categorize the vehicle as a motorcycle or non-motorcycle.
Detection of helmet:
Determining the RoI: only the region of interest is processed, which reduces processing time and improves efficiency.
Extracting the features: a sub-window is formed within the RoI generated above, and the main part of the image (the head in this case) is extracted and passed as input to the classifier to check whether the rider has put on a helmet. This paper deals mainly with helmet detection; for use in a surveillance system it would also need to detect the vehicle's number plate in order to impose fines on the rider, which is lacking in this project.
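As a hedged illustration of the background-subtraction and segmentation steps above, the snippet below uses OpenCV's MOG2 subtractor; the subtractor choice, the area threshold, and the video file name are assumptions, not details from the cited work.

```python
import cv2

cap = cv2.VideoCapture("road_feed.mp4")                   # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()         # models the static road background
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # white pixels = moving vehicles
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving_objects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
cap.release()
```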
D. Romuere R.V.E Silva, Kelson R.T. Aires, Rodrigo De M. S. Veras “Detection Of Helmet On Motorcyclists”
This work follows a similar pipeline of background detection, segmentation of moving objects, and vehicle classification: a feature vector is obtained for each segmented image and passed to a random forest classifier to categorize the vehicle as a motorcycle or non-motorcycle.
E. Vishnu, Dinesh Singh, C. Krishna Mohan and Sobhan Babu “Detection of Motorcyclists without Helmet in Videos using Convolutional Neural Network”
The use of helmets is critical in reducing the severity of head injuries sustained by motorcyclists in the event of an accident. Despite the widespread awareness campaigns, many motorcyclists still neglect to wear helmets. In recent years, the development of artificial intelligence techniques like convolutional neural networks (CNNs) has allowed for the creation of automated systems that can detect helmet usage in images and videos.
The study "Detection of Motorcyclists without Helmet in Videos using Convolutional Neural Network" by Vishnu, Dinesh Singh, C. Krishna Mohan, and Sobhan Babu explores the use of CNNs for detecting motorcyclists without helmets in videos. The research used the Faster R-CNN architecture, which is a state-of-the-art CNN model for object detection tasks.
The research team collected a dataset of videos containing motorcyclists with and without helmets in various conditions such as different backgrounds, angles, and lighting. The dataset was preprocessed by extracting frames from the videos and labeling them according to whether the motorcyclist was wearing a helmet or not. The preprocessed dataset was then used to train the Faster R-CNN model. The training involved adjusting the model's weights through a process called backpropagation, which uses an optimization algorithm to minimize the difference between the predicted and actual labels. The training was repeated multiple times with different hyperparameters and data augmentation techniques to optimize the model's performance.
The trained model was then tested on a separate dataset of videos containing motorcyclists with and without helmets. The model's performance was evaluated using metrics such as precision, recall, and F1 score. The results showed that the model achieved high accuracy in detecting helmet usage in videos.
The study's findings have significant implications for enhancing road safety by enabling automated systems that can detect motorcyclists without helmets in real-time. This technology can be integrated into existing surveillance systems to identify and alert authorities to any helmet-less motorcyclists on the roads. Additionally, this technology can aid in enforcing helmet usage laws and promoting helmet awareness campaigns by providing reliable and objective evidence.
In conclusion, the study by Vishnu, Dinesh Singh, C. Krishna Mohan, and Sobhan Babu demonstrates the effectiveness of CNNs in detecting helmet usage in videos of motorcyclists. This technology has the potential to improve road safety by alerting authorities to helmet-less motorcyclists on the roads and by supporting helmet awareness campaigns.
F. "A Motorcycle Detection and Classification System for Intelligent Transportation Systems", published in the 2017 IEEE International Conference on Robotics and Biomimetics
The authors put forth a cutting-edge method for distinguishing motorcyclists from other types of vehicles on the road that mixes deep learning algorithms with conventional image processing methods. Motorcycle detection and motorcycle classification are the system's two key phases. A dataset of actual traffic scenes was used to evaluate the suggested system, and the findings revealed that it was very accurate at identifying and classifying motorcycles. When measured against other cutting-edge techniques, the system excelled in terms of accuracy and speed. The suggested system offers a viable method for identifying and categorizing motorbikes in traffic scenes, which may be included into intelligent transportation systems to enhance traffic management and road safety.
G. "Motorcycle Detection and Tracking in Traffic Videos" by T. Nguyen, M. Nguyen, and H. Luong, published in the 2019 IEEE International Conference on Advanced Technologies for Communications.
The study proposes a strategy for detecting and tracking motorcycles in traffic footage. The two key steps of the approach are motorcycle detection and motorcycle tracking.
A deep convolutional neural network (CNN) is utilized in the motorcycle detection step to categorize regions of interest in the input video frames as either motorcycles or non-motorcycles. In order to teach the CNN the characteristics of motorcycles that set them apart from other items in the picture, a sizable collection of annotated motorcycle photographs was used in training.
On a dataset of traffic videos, the proposed method was assessed and contrasted with several state-of-the-art approaches. The experimental results demonstrate that the suggested method detects and tracks motorcyclists in traffic footage more accurately than the compared approaches.
In conclusion, the research offers a promising method for motorcycle detection and tracking in traffic recordings, which may be beneficial in a number of applications, including traffic monitoring, safety analysis, and autonomous driving.
H. "Real-Time Motorcycle Detection Using Deep Convolutional Neural Networks" by M. K. Kim, K. Y. Kim, and J. H. Kim, published in the 2018 IEEE International Conference on Consumer Electronics.
The method described in the research detects motorcycles in a video stream using deep convolutional neural networks (DCNNs) in real-time. The system is made to operate in real-time and has a wide range of uses, including traffic monitoring and surveillance.
The motorcycle candidate generating stage and the motorcycle candidate verification stage make up the suggested system. In the initial stage, the system creates candidate regions in the video stream that may contain motorcycles using a DCNN. The system employs a different DCNN in the second stage to confirm whether or not the candidate regions include motorcycles.
I. "A Novel Approach for Motorcycle Detection and Classification" by J. N. D. Gupta, S. D. Kumar, and R. K. Kamat
Two parts make up the suggested method: motorbike detection and motorcycle classification. A pre-trained deep convolutional neural network (CNN) is utilized to identify motorcycles in the input image in the first step. To extract characteristics from the input image and identify motorcycles, the authors use a Faster R-CNN network with a ResNet-101 backbone, which has been pre-trained on the COCO dataset. Overall, the research proposes a novel method for computer vision and deep learning-based motorcycle detection and categorization. The suggested approach may be used to increase safety on roads and highways in practical situations.
VI. RESULT
In this paper, traffic violations are detected using the YOLOv4 object detection algorithm[11]. The detection focuses on four primary violations: lack of helmet usage, riding triples on a bike, over-speeding, and accident detection. The performance of the model is evaluated using metrics such as precision, recall, F1-score, and mean Average Precision (mAP).
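For clarity, these metrics are computed from the counts of true positives (TP), false positives (FP), and false negatives (FN):
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 * Precision * Recall / (Precision + Recall)
and mAP is the mean over all classes of the Average Precision, i.e., the area under each class's precision-recall curve.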
The proposed system functions as an integrated system for identifying different violations of motorcycle traffic laws. It includes over-speed detection, helmet detection, and triple riding detection, and can be used on both public roads and college campuses. The performance of the proposed system could be further enhanced in the future. A system for detecting and analyzing motorcycle rule violations can be a useful tool for enhancing traffic safety. The system marks non-helmet riders with a red bounding box indicating that a traffic rule has been violated, and draws a green bounding box when traffic rules are followed. Once all violations are identified in a specific area, an analysis of that area is generated, highlighting the locations with the highest number of violations. This information can be used by traffic authorities to concentrate their efforts and plan accordingly in the areas with the most violations; for instance, they could increase the number of speed cameras in places where over-speeding is common or enforce harsher penalties for infractions. To protect civil liberties and privacy, it is crucial to ensure the system is implemented properly, and only the information required for identifying violations should be collected and stored.
REFERENCES
[1] Nikhil Kumar, Debopam Acharya, and Divya Lohani, "An IoT-Based Vehicle Accident Detection and Classification System Using Sensor Fusion," IEEE Internet of Things Journal, vol. 8, no. 2, pp. 869-880, 15 Jan. 2021, doi: 10.1109/JIOT.2020.3008896.
[2] Enas Odat, Jeff S. Shamma, and Christian Claudel, "Vehicle Classification and Speed Estimation Using Combined Passive Infrared/Ultrasonic Sensors," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 5, pp. 1593-1606, May 2018, doi: 10.1109/TITS.2017.2727224.
[3] Md. Syedul Amin, Jubayer Jalil, and M. B. I. Reaz, "Accident detection and reporting system using GPS, GPRS and GSM technology," 2012 International Conference on Informatics, Electronics & Vision (ICIEV), 2012, pp. 640-643, doi: 10.1109/ICIEV.2012.6317382.
[4] Durgesh Kumar Yadav, Renu, Ankita, and Iftisham Anjum, "Accident Detection Using Deep Learning," 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), 2020, pp. 232-235, doi: 10.1109/ICACCCN51052.2020.9362808.
[5] Dikshant Manocha, Ankita Purkayastha, Yatin Chachra, Namit Rastogi, and Varun Goel, "Helmet Detection Using ML & IoT," 2019 International Conference on Signal Processing and Communication (ICSC), 2019, pp. 355-358, doi: 10.1109/ICSC45622.2019.8938394.
[6] M. Ning, Y. Lu, W. Hou, and M. Matskin, "YOLOv4-object: an Efficient Model and Method for Object Discovery," 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), 2021, pp. 31-36, doi: 10.1109/COMPSAC51774.2021.00016.
[7] Romuere R. V. e Silva, Kelson R. T. Aires, and Rodrigo de M. S. Veras, "Detection of Helmet on Motorcyclists."
[8] Lokesh Allamki, Manjunath Panchakshari, Ashish Sateesha, and K. S. Pratheek, "Helmet Detection using Machine Learning and Automatic Number Plate Recognition."
[9] R. Silva, K. Aires, T. Santos, K. Abdala, R. Veras, and A. Soares, "Automatic detection of motorcyclists without helmets," XXXIX Latin American Computing Conference (CLEI), pp. 1-7, Oct. 2013.
[10] J. Chiverton, "Helmet presence classification with motorcycle detection and tracking," IET Intelligent Transport Systems, vol. 6, no. 3, pp. 259-269, September 2012.
[11] J. N. D. Gupta, S. D. Kumar, and R. K. Kamat, "A Novel Approach for Motorcycle Detection and Classification."
Copyright © 2023 Kevin Paul Babu, Megha Raju, Neethu M, Prof. R. Rajaram. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET52720
Publish Date : 2023-05-22
ISSN : 2321-9653
Publisher Name : IJRASET